Prior to Oracle Database 12c, an ASM instance ran on every node in the cluster, and the ASM Cluster File System (ACFS) service on each node connected to the local ASM instance to fetch the required metadata. If the ASM instance on a node failed, ACFS file systems could no longer be accessed on that node.
With the introduction of Flex ASM in Oracle 12c, the hard dependency between ASM and its clients has been relaxed: only a smaller number of ASM instances need to run, on a subset of servers in the cluster. To keep ACFS services available on nodes without an ASM instance, Flex ASM introduces a new instance type: the ASM proxy instance, which works on behalf of a real ASM instance. The ASM proxy instance fetches the metadata about ACFS volumes and file systems from an ASM instance and caches it. If no ASM instance is available locally, the proxy connects to an ASM instance on another node over the network to fetch the metadata. Additionally, if the local ASM instance fails, the ASM proxy instance can fail over to a surviving ASM instance on a different server, so shared storage and ACFS file systems remain available without interruption.
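To see how many ASM instances Flex ASM maintains and where they currently run, the standard srvctl tooling can be used. A minimal sketch, assuming the 12c Grid Infrastructure srvctl is in the PATH; the cardinality value is only illustrative:

```
# Show the Flex ASM configuration, including the ASM cardinality
# (how many ASM instances the clusterware tries to keep running).
srvctl config asm

# Show on which nodes ASM instances are currently running.
srvctl status asm -detail

# Raise or lower the cardinality, e.g. to two instances (illustrative value).
srvctl modify asm -count 2
```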
Whenever IO needs to be performed on an ACFS, the ASM-Proxy instance passes on the extent map and disk list information to the ADVM driver. Subsequently, this metadata is cached by the ADVM driver. ADVM directs all the ACFS IOs to a specific ASM disk group disk (disks) location, including any mirrored extent updates. In other words, all ACFS IOs are written through the ADVM OS Kernel driver directly to storage. No IOs are delivered through the ASM proxy or ASM instance.
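Since all ACFS IO goes through the ADVM kernel driver rather than through any Oracle instance, the driver stack must be loaded on every node that mounts an ACFS. A quick sanity check on Linux (a sketch; the module names are those of the Linux port, and acfsdriverstate lives in the grid home):

```
# List the Oracle kernel modules that implement the ACFS/ADVM stack.
lsmod | egrep 'oracleacfs|oracleadvm|oracleoks'

# Ask Grid Infrastructure itself about the driver state.
$ORACLE_HOME/bin/acfsdriverstate loaded
$ORACLE_HOME/bin/acfsdriverstate version
```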
Which nodes can host an ASM proxy instance?
An ASM proxy instance needs to run only in clusters employing Flex ASM, and only on the nodes where access to ACFS is required. Whereas it can run on any node in a standard cluster, only Hub nodes in a Flex cluster can host it (Fig. 1); a quick way to check both is sketched after the figure. It can run on the same node as an ASM instance or on a different one, and it can be shut down when ACFS is not in use.
Fig. 1
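A sketch of the checks, using the same clusterware tooling the demonstration below relies on:

```
# Show each node's configured and active role; only Hub nodes can host
# an ASM proxy in a Flex cluster.
crsctl get node role status -all

# Show where the ASM proxy instances (resource ora.proxy_advm) are running.
srvctl status asm -proxy
```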
The ASM proxy instance has (see the sketch below):
- INSTANCE_TYPE initialization parameter set to ASMPROXY
- ORACLE_SID set to +APX<node number>.
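Both settings can be inspected by connecting to the proxy instance directly. A minimal sketch, assuming a grid home of /u01/app/12.1.0/grid and node number 1; adjust both to your environment:

```
# Point the environment at the grid home and the proxy SID
# (the node number suffix varies per node).
export ORACLE_HOME=/u01/app/12.1.0/grid
export ORACLE_SID=+APX1
$ORACLE_HOME/bin/sqlplus / as sysasm

SQL> show parameter instance_type
SQL> show parameter instance_name
```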
In this article, I will demonstrate that with Flex ASM:
- Metadata related to ACFS is cached in the ASM proxy instance rather than the ASM instance
- The ASM proxy instance obtains the ACFS metadata from:
  - An ASM instance running locally
  - An ASM instance running remotely, if the local ASM instance is not running
- Availability of ACFS on a node:
  - Requires that an ASM proxy instance be running on that node
  - Is not affected by the failure of the local ASM instance
For this demonstration, I have set up a two-node 12.1.0.2 Flex Cluster (with Flex ASM).
Overview:
- Create a Cloud File System resource on the DATA disk group.
- Verify that:
  - All the ASM- and ACFS-related resources are running on both nodes
  - Metadata related to ACFS is cached in the ASM proxy instance rather than the ASM instance
  - The ASM proxy instance obtains the ACFS metadata from the ASM instance running locally
- Verify that on stopping ASM on a node (host02):
  - The ASM proxy instance obtains the ACFS metadata from an ASM instance running remotely
  - Availability of ACFS on the node is not affected
- Verify that on stopping the ASM proxy instance on a node (host02), ACFS cannot be accessed on that node even if an ASM instance is available.
Demonstration:
- Verify that the cluster is running in Flex mode:

```
[root@host01 ~]# crsctl get cluster mode status
Cluster is running in "flex" mode

[root@host01 ~]# crsctl get node role status -all
Node 'host01' active role is 'hub'
Node 'host02' active role is 'hub'

ASMCMD> showclustermode
ASM cluster : Flex mode enabled
```
- Create a Cloud File System resource on the DATA disk group:
- Check that an ASM proxy instance is not currently running:
```
[root@host01 ~]# crsctl stat res ora.proxy_advm -t
CRS-2613: Could not find resource 'ora.proxy_advm'.
```
- Create a volume VOL1 on the DATA disk group:
```
[grid@host01 root]$ asmcmd setattr -G DATA compatible.advm 12.1.0.0.0
[grid@host01 root]$ asmcmd volcreate -G DATA -s 200m VOL1
[grid@host01 root]$ asmcmd volinfo -G DATA VOL1
Diskgroup Name: DATA

         Volume Name: VOL1
         Volume Device: /dev/asm/vol1-104
         State: ENABLED
         Size (MB): 256
         Resize Unit (MB): 64
         Redundancy: MIRROR
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage:
         Mountpath:
```
- As soon as the volume is created, an ASM proxy instance is automatically started on both nodes. (Note that the requested size of 200 MB was rounded up to 256 MB, the next multiple of the 64 MB resize unit.)
```
[root@host01 ~]# crsctl stat res ora.proxy_advm -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.proxy_advm
               ONLINE  ONLINE       host01                   STABLE
               ONLINE  ONLINE       host02                   STABLE

[root@host01 ~]# ps -ef |grep pmon
grid      7209     1  0 14:28 ?        00:00:00 asm_pmon_+ASM1
grid     10076     1  0 14:34 ?        00:00:00 apx_pmon_+APX1
root     11297  7103  0 14:37 pts/1    00:00:00 grep pmon

[root@host02 ~]# ps -ef |grep pmon
grid     13901     1  0 14:33 ?        00:00:00 apx_pmon_+APX2
root     15113 12648  0 14:37 pts/3    00:00:00 grep pmon
grid     16229     1  0 13:13 ?        00:00:00 asm_pmon_+ASM2
grid     20548     1  0 13:16 ?        00:00:00 mdb_pmon_-MGMTDB
```
- Create an ACFS File System on the newly created volume VOL1
```
[root@host01 ~]# mkfs -t acfs /dev/asm/vol1-104
mkfs.acfs: version                   = 12.1.0.2.0
mkfs.acfs: on-disk version           = 39.0
mkfs.acfs: volume                    = /dev/asm/vol1-104
mkfs.acfs: volume size               = 268435456  ( 256.00 MB )
mkfs.acfs: Format complete.
```
- Create Corresponding Mount Points on both nodes
```
[root@host01 ~]# mkdir -p /mnt/acfsmounts/acfs1
[root@host02 ~]# mkdir -p /mnt/acfsmounts/acfs1
```
- Configure and start the Cloud File System resource on the volume device VOL1 with the mount point /mnt/acfsmounts/acfs1:
```
[root@host01 ~]# srvctl add filesystem -path /mnt/acfsmounts/acfs1 -device /dev/asm/vol1-104
[root@host01 ~]# srvctl start filesystem -d /dev/asm/vol1-104
[root@host01 ~]# srvctl status filesystem -device /dev/asm/vol1-104
ACFS file system /mnt/acfsmounts/acfs1 is mounted on nodes host01,host02
```
- Verify the Cloud File System resource by creating a small text file on it from host01 and then reading it from host02:
```
[root@host01 ~]# echo "Test File on ACFS" > /mnt/acfsmounts/acfs1/testfile.txt

[root@host02 asm]# cat /mnt/acfsmounts/acfs1/testfile.txt
Test File on ACFS
```
- Verify that all the ASM- and ACFS-related resources are running on both nodes:
```
[root@host01 ~]# crsctl stat res ora.asm ora.DATA.dg ora.DATA.VOL1.advm ora.data.vol1.acfs -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.VOL1.advm
               ONLINE  ONLINE       host01                   Volume device /dev/a
                                                             sm/vol1-104 is onlin
                                                             e,STABLE
               ONLINE  ONLINE       host02                   Volume device /dev/a
                                                             sm/vol1-104 is onlin
                                                             e,STABLE
ora.DATA.dg
               ONLINE  ONLINE       host01                   STABLE
               ONLINE  ONLINE       host02                   STABLE
ora.asm
               ONLINE  ONLINE       host01                   Started,STABLE
               ONLINE  ONLINE       host02                   Started,STABLE
ora.data.vol1.acfs
               ONLINE  ONLINE       host01                   mounted on /mnt/acfs
                                                             mounts/acfs1,STABLE
               ONLINE  ONLINE       host02                   mounted on /mnt/acfs
                                                             mounts/acfs1,STABLE
```
- Verify that metadata for ACFS is not cached in the ASM instance:
```
+ASM1> sho parameter instance_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
instance_name                        string      +ASM1

+ASM1> select name, state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
DATA                           MOUNTED

+ASM1> select fs_name, vol_device from v$asm_acfsvolumes;

no rows selected

+ASM1> select fs_name, state from v$asm_filesystem;

no rows selected

+ASM1> select volume_name, volume_device, mountpath from v$asm_volume;

VOLUME_NAM VOLUME_DEVICE        MOUNTPATH
---------- -------------------- -------------------------
VOL1       /dev/asm/vol1-104    /mnt/acfsmounts/acfs1
```
- Verify that metadata for ACFS is cached in the ASM proxy instance:
```
[grid@host01 ~]$ cat /etc/oratab | grep APX
+APX1:/u01/app/12.1.0/grid:N            # line added by Agent

[grid@host01 ~]$ ps -ef |grep pmon
grid      8341     1  0 12:10 ?        00:00:00 asm_pmon_+ASM1
grid     10072     1  0 12:12 ?        00:00:00 apx_pmon_+APX1
grid     10752     1  0 12:13 ?        00:00:00 mdb_pmon_-MGMTDB
root     19167  7644  0 12:22 pts/1    00:00:00 grep pmon

+APX1> sho parameter instance_type

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
instance_type                        string      ASMPROXY

+APX1> sho parameter instance_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
instance_name                        string      +APX1

+APX1> select name, state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
DATA                           CONNECTED

+APX1> select fs_name, vol_device from v$asm_acfsvolumes;

FS_NAME                        VOL_DEVICE
------------------------------ --------------------
/mnt/acfsmounts/acfs1          /dev/asm/vol1-104

+APX1> select fs_name, state from v$asm_filesystem;

FS_NAME                        STATE
------------------------------ -------------
/mnt/acfsmounts/acfs1          AVAILABLE

+APX1> select volume_name, volume_device, mountpath from v$asm_volume;

VOLUME_NAM VOLUME_DEVICE        MOUNTPATH
---------- -------------------- -------------------------
VOL1       /dev/asm/vol1-104    /mnt/acfsmounts/acfs1
```
- Verify that both ASM proxy instances obtain the ACFS metadata from the ASM instance running locally:
```
+ASM1> SELECT DISTINCT i.instance_name asm_instance_name,
                       i.host_name asm_host_name,
                       c.instance_name client_instance_name,
                       c.status
       FROM gv$instance i, gv$asm_client c
       WHERE i.inst_id = c.inst_id;

ASM_INSTANCE_NAM ASM_HOST_NAME        CLIENT_INSTANCE_NAME STATUS
---------------- -------------------- -------------------- ------------
+ASM1            host01.example.com   +APX1                CONNECTED
+ASM1            host01.example.com   +ASM1                CONNECTED
+ASM2            host02.example.com   +APX2                CONNECTED
```
- Verify that on stopping ASM on a node (host02):
  - The ASM proxy instance obtains the ACFS metadata from an ASM instance running remotely
  - Availability of ACFS on the node is not affected
```
[root@host01 ~]# srvctl stop asm -n host02
PRCR-1014 : Failed to stop resource ora.asm
PRCR-1065 : Failed to stop resource ora.asm
CRS-2529: Unable to act on 'ora.asm' because that would require stopping or relocating 'ora.DATA.dg', but the force option was not specified

[root@host01 ~]# srvctl stop asm -n host02 -f

[root@host01 ~]# srvctl status asm
ASM is running on host01

+ASM1> SELECT DISTINCT i.instance_name asm_instance_name,
                       i.host_name asm_host_name,
                       c.instance_name client_instance_name,
                       c.status
       FROM gv$instance i, gv$asm_client c
       WHERE i.inst_id = c.inst_id;

ASM_INSTANCE_NAM ASM_HOST_NAME        CLIENT_INSTANCE_NAME STATUS
---------------- -------------------- -------------------- ------------
+ASM1            host01.example.com   +APX1                CONNECTED
+ASM1            host01.example.com   +APX2                CONNECTED
+ASM1            host01.example.com   +ASM1                CONNECTED

[root@host01 ~]# crsctl stat res ora.asm ora.DATA.dg ora.DATA.VOL1.advm ora.data.vol1.acfs -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.VOL1.advm
               ONLINE  ONLINE       host01                   Volume device /dev/a
                                                             sm/vol1-104 is onlin
                                                             e,STABLE
               ONLINE  ONLINE       host02                   Volume device /dev/a
                                                             sm/vol1-104 is onlin
                                                             e,STABLE
ora.DATA.dg
               ONLINE  ONLINE       host01                   STABLE
               OFFLINE OFFLINE      host02                   STABLE
ora.data.vol1.acfs
               ONLINE  ONLINE       host01                   mounted on /mnt/acfs
                                                             mounts/acfs1,STABLE
               ONLINE  ONLINE       host02                   mounted on /mnt/acfs
                                                             mounts/acfs1,STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       host01                   Started,STABLE
      2        OFFLINE OFFLINE                               Instance Shutdown,ST
                                                             ABLE
--------------------------------------------------------------------------------
```
- Verify that on stopping the ASM proxy instance on a node (host02), ACFS cannot be accessed on that node even though an ASM instance is present:
```
[root@host01 ~]# srvctl start asm -n host02

[root@host01 ~]# srvctl stop asm -proxy -n host02
PRCR-1014 : Failed to stop resource ora.proxy_advm
PRCR-1065 : Failed to stop resource ora.proxy_advm
CRS-2529: Unable to act on 'ora.proxy_advm' because that would require stopping or relocating 'ora.DATA.VOL1.advm', but the force option was not specified

[root@host01 ~]# srvctl stop asm -proxy -n host02 -f

[root@host01 ~]# crsctl stat res ora.asm ora.DATA.dg ora.DATA.VOL1.advm ora.data.vol1.acfs -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.VOL1.advm
               ONLINE  ONLINE       host01                   Volume device /dev/a
                                                             sm/vol1-104 is onlin
                                                             e,STABLE
               OFFLINE OFFLINE      host02                   Volume device /dev/a
                                                             sm/vol1-104 is offli
                                                             ne,STABLE
ora.DATA.dg
               ONLINE  ONLINE       host01                   STABLE
               ONLINE  ONLINE       host02                   STABLE
ora.data.vol1.acfs
               ONLINE  ONLINE       host01                   mounted on /mnt/acfs
                                                             mounts/acfs1,STABLE
               OFFLINE OFFLINE      host02                   volume /mnt/acfsmoun
                                                             ts/acfs1 is unmounte
                                                             d,STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       host01                   Started,STABLE
      2        ONLINE  ONLINE       host02                   Started,STABLE
--------------------------------------------------------------------------------
```
Thus, in a Flex cluster all metadata about ACFS is cached in the ASM proxy instance, and the availability of an ADVM volume and its ACFS file system on a node depends only on the availability of an ASM proxy instance on that node, irrespective of the status of the local ASM instance.
Summary
- In a cluster with an ASM instance running on every node:
  - Metadata related to ACFS is cached in the ASM instance
  - Failure of the local ASM instance disrupts the availability of ACFS on that node
- Flex ASM introduces a new instance type, the ASM proxy instance, which obtains metadata from an ASM instance on behalf of ACFS.
- In a cluster with Flex ASM:
  - ASM instances run on a subset of nodes
  - Metadata related to ACFS is cached in the ASM proxy instance rather than the ASM instance
  - The ASM proxy instance obtains the ACFS metadata from an ASM instance running locally or remotely
  - Availability of ACFS on a node depends only on the availability of an ASM proxy instance on that node, irrespective of the status of the local ASM instance.